Concurrency Models, Event Loop & Race Conditions
Threading Overheads (Advanced)
Context Switching
Switching between threads requires:
- Saving CPU registers
- Updating thread state (bookkeeping)
- Loading the next thread’s registers

Cost:
- ~1–10 microseconds per switch

With many threads:
- Thousands of switches → milliseconds wasted

Problem:
- Context switching does no useful work
- Reduces overall system performance
Limitation of Thread-per-Request Model
Many threads → high:
- Memory usage
- Context switching overhead

Result:
- Poor scalability under high concurrency
Event Loop Model (Alternative to Threads)
Core Idea
Use a single thread + non-blocking IO

Tasks:
- Never block
- Give up control when waiting
How Event Loop Works
Step-by-Step
- Task starts execution
- Hits IO (e.g., a DB query)
- Registers a callback
- Yields control to the event loop

Event loop:
- Monitors IO completion
- Resumes the task when ready
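The step-by-step flow above can be sketched with Python's asyncio (the handler names and the simulated DB call are illustrative, not part of any real API):

```python
import asyncio

async def handle_request(request_id):
    # Task starts execution, then hits IO (simulated DB query).
    # The await registers a callback and yields control to the event loop.
    await asyncio.sleep(0.01)  # stand-in for a non-blocking DB call
    # The event loop resumes this task once the IO completes.
    return f"result-{request_id}"

async def main():
    # Many tasks share one thread; the loop interleaves them at each await.
    return await asyncio.gather(*(handle_request(i) for i in range(3)))

print(asyncio.run(main()))
```

`asyncio.gather` returns results in the order the tasks were passed in, even though their IO completes concurrently.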
Key Rule
❗ Never block the event loop

If the loop is blocked:
- The entire system stalls
- No other task can run
Event Loop Internals
Maintains:
- Task queues
- Callback queues

Uses OS-level readiness mechanisms:
- Linux: epoll
- macOS/BSD: kqueue

Loop cycle:
- Check for IO completion
- Execute the ready callbacks
- Repeat
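One cycle of such a loop can be seen directly with Python's selectors module, which wraps epoll on Linux and kqueue on macOS/BSD. This is a bare-bones sketch using a local socket pair in place of real network IO, not a production loop:

```python
import selectors
import socket

sel = selectors.DefaultSelector()   # epoll on Linux, kqueue on macOS/BSD

# A connected socket pair stands in for real network IO.
reader, writer = socket.socketpair()
reader.setblocking(False)

def on_readable(sock):
    # Callback executed once the IO has completed (data is readable).
    return sock.recv(1024)

sel.register(reader, selectors.EVENT_READ, on_readable)
writer.send(b"hello")

# One loop cycle: check for IO completion, then execute the ready callbacks.
events = sel.select(timeout=1)
results = [key.data(key.fileobj) for key, _mask in events]
print(results)

sel.unregister(reader)
reader.close()
writer.close()
```

A real event loop runs this check-and-dispatch cycle forever, with the callback data stored per registered file descriptor.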
Why Event Loop is Efficient
- Single thread → no thread context switching
- No per-request thread stacks → low memory usage
- Ideal for IO-bound workloads
Trade-off of Event Loop
- Poor for CPU-heavy tasks
Example Problem
If a task needs 100ms of CPU:
- The entire loop is blocked for 100ms
- All other requests are delayed
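The delay is easy to reproduce: below, a CPU-style blocking call (simulated with `time.sleep`, which does not yield to the loop) forces a task that only needs 10ms of waiting to finish after the 50ms blocker:

```python
import asyncio
import time

order = []

async def blocking_task():
    time.sleep(0.05)           # blocking call: the event loop cannot run
    order.append("blocking")

async def quick_task():
    await asyncio.sleep(0.01)  # only needs 10ms of (non-blocking) waiting
    order.append("quick")

async def main():
    await asyncio.gather(blocking_task(), quick_task())

asyncio.run(main())
print(order)  # ['blocking', 'quick'] — the quick task had to wait out the blocker
```

Because `blocking_task` never awaits, the loop cannot interleave `quick_task` until the blocker has finished.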
Async/Await Explained
What Happens Internally
await:
- Pauses the function
- Registers a callback
- Returns control to the event loop
Mental Model
- Async function = State Machine
Example flow:
- State 0 → Start
- Await DB → pause
- Resume → State 1
- Await next IO → pause
- Resume → State 2
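The state-machine view can be made concrete with a plain Python generator: each `yield` marks a boundary where the function pauses, exactly where an `await` would split it into states. This is a sketch of the mechanism; real async frameworks generate this machinery for you:

```python
def handler():
    # State 0: start
    db_result = yield "await DB"        # pause at the first await point
    # State 1: resumed with the DB result
    io_result = yield "await next IO"   # pause at the second await point
    # State 2: resumed with the second result; function completes
    return [db_result, io_result]

# The "event loop" drives the state machine by sending results back in.
gen = handler()
states = [next(gen)]                # run State 0 up to the first pause
states.append(gen.send("db-row"))   # resume into State 1, run to next pause
try:
    gen.send("file-bytes")          # resume into State 2; generator finishes
except StopIteration as done:
    states.append(done.value)       # the function's return value

print(states)
```

The caller never blocks between states; it simply resumes the machine whenever a result is ready, which is precisely what the event loop does for async functions.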
Key Insight
- await ≠ blocking the thread
- await = splitting the function into resumable states
Callback vs Async/Await
Callback Style (Old)
- Nested functions
- Hard to read
- “Callback hell”
Async/Await (Modern)
- Cleaner syntax
- Same underlying behavior
- Still uses callbacks internally
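The same two-step IO pipeline in both styles shows the difference; all function names here are illustrative:

```python
import asyncio

# Callback style: each step nests inside the previous one ("callback hell").
def fetch_user_cb(user_id, callback):
    callback({"id": user_id})

def fetch_orders_cb(user, callback):
    callback([f"order-for-{user['id']}"])

def pipeline_cb(user_id, done):
    def on_user(user):                    # nesting grows with every step
        def on_orders(orders):
            done(orders)
        fetch_orders_cb(user, on_orders)
    fetch_user_cb(user_id, on_user)

# Async/await style: same behavior, flat control flow.
async def fetch_user(user_id):
    return {"id": user_id}

async def fetch_orders(user):
    return [f"order-for-{user['id']}"]

async def pipeline(user_id):
    user = await fetch_user(user_id)
    return await fetch_orders(user)

results = []
pipeline_cb(7, results.append)
results.append(asyncio.run(pipeline(7)))
print(results)  # both styles produce the same result
```

The async version reads top to bottom, yet under the hood the runtime still suspends and resumes it via callbacks, just as the nested version does explicitly.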
Threading vs Event Loop (Execution Flow)
Threading Model
Each request:
- Runs in a separate thread

On IO:
- The thread blocks
- The OS context-switches to another thread
Event Loop Model

A single thread:
- Handles all requests

On IO:
- The task yields
- The event loop switches to another task cheaply, in user space
Go Concurrency Model (Hybrid Approach)
Go Routines (Virtual Threads)
- Lightweight threads managed by Go runtime
- Created per request
Key Concept
- Not OS threads
- Managed by Go scheduler
How Go Scheduler Works
Structure
- A small, limited pool of OS threads

Each OS thread has:
- A queue of goroutines to run
Execution Flow
- Goroutine starts
- Hits IO → pauses
- Scheduler switches to another goroutine
- Resumes later, when the IO completes
Why Go is Efficient
Goroutines are:
- Lightweight (small, growable stacks)
- Cheap to create

Context switching:
- Very fast — done in user space by the Go scheduler, largely a pointer/register swap

Scale:
- A single process can run thousands to millions of goroutines
Comparison of Models
Threading
- Heavy
- OS-managed
- High overhead
Event Loop
- Single thread
- Very efficient for IO
- Bad for CPU-heavy work
Go Routines
- Hybrid model
- Lightweight + scalable
- Handles both IO + CPU better
Race Conditions (Critical Concept)
What is a Race Condition?
- Multiple threads modify shared data simultaneously
- Leads to incorrect results
Example: Counter Problem
Expected:
- Counter = 0
- Two threads increment → should be 2
Actual (with an unlucky interleaving):
- Counter can end up at 1
Why?
The two increments overlap:
- Each is really Read → Modify → Write
- Both threads read the same old value
- One update overwrites the other
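The lost update can be reproduced deterministically by forcing the bad interleaving — both "threads" read before either writes:

```python
counter = 0

# Simulate the read → modify → write of two threads, interleaved badly:
read_a = counter        # thread A reads 0
read_b = counter        # thread B reads 0 (A hasn't written yet)
counter = read_a + 1    # thread A writes 1
counter = read_b + 1    # thread B also writes 1 — A's update is lost

print(counter)  # 1, not the expected 2
```

With real threads the interleaving is nondeterministic, which is what makes this class of bug so hard to reproduce and debug.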
Lost Update Problem
- One thread’s update is lost
- Common concurrency bug
Important Insight
- Even single-threaded async systems can have race conditions
Example: Bank Withdrawal
Scenario
- Balance = 100
- Two withdrawals of 100
Execution Flow
- First request checks → valid
- Second request checks → valid
- Both proceed
- Final balance = -100 ❌
Why This Happens
- Async code yields control at each await
- Other operations interleave during the wait
- Shared state becomes inconsistent
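A minimal asyncio reproduction of the scenario — `await asyncio.sleep(0)` stands in for the DB round trip between the balance check and the debit:

```python
import asyncio

balance = 100

async def withdraw(amount):
    global balance
    if balance >= amount:        # check
        await asyncio.sleep(0)   # yield to the event loop (e.g., a DB call)
        balance -= amount        # act — but the check may be stale by now
        return True
    return False

async def main():
    # Two concurrent withdrawals of 100 from a balance of 100.
    return await asyncio.gather(withdraw(100), withdraw(100))

results = asyncio.run(main())
print(results, balance)  # [True, True] -100 — both checks passed
```

Even with a single thread, the interleaving at the `await` lets both requests see the old balance: the check and the act are not atomic.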
Solutions to Race Conditions
1. Locks / Mutex
- Only one thread enters critical section
import threading
lock = threading.Lock()

lock.acquire()
# critical section (prefer "with lock:" so release happens even on errors)
lock.release()
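Applying the lock to the counter problem makes the read → modify → write atomic as a unit; the final count is then always correct:

```python
import threading

counter = 0
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:          # only one thread in the critical section at a time
            counter += 1    # read → modify → write, now protected as a unit

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 200000 — no lost updates
```

The trade-off is that the critical section serializes the threads, so locks should protect as little code as possible.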
2. Message Passing (Go Channels)
- Avoid shared memory
- Use communication instead
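Go channels aren't available in Python, but `queue.Queue` gives the same "communicate instead of sharing" shape. This is a sketch of the idea, not Go's actual API: one owner thread holds the state, and everyone else sends it messages:

```python
import queue
import threading

# Only the owner thread touches the balance; others send it messages.
deposits = queue.Queue()

def account_owner(n_messages, result):
    balance = 0
    for _ in range(n_messages):
        balance += deposits.get()   # state is mutated by this thread alone
    result.append(balance)

result = []
owner = threading.Thread(target=account_owner, args=(4, result))
owner.start()
for amount in (10, 20, 30, 40):
    deposits.put(amount)            # communicate; no shared mutable state
owner.join()
print(result[0])  # 100
```

Because no two threads ever mutate the same variable, there is no race to lock against — which is the Go proverb "don't communicate by sharing memory; share memory by communicating."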
Key Takeaways
1. Context Switching is Costly
- Too many threads → performance loss
2. Event Loop is Best for IO
- Lightweight
- Efficient
- Requires non-blocking code
3. Go Routines = Best of Both Worlds
- Lightweight like event loop
- Flexible like threads
4. Race Conditions Exist Everywhere
- Threads ✔
- Async/await ✔
- Even single-threaded systems ✔
5. Golden Rule
- Avoid shared mutable state
Final Summary
Concurrency:
- Keeps the CPU busy during IO

Parallelism:
- Speeds up CPU-heavy tasks

Backend systems:
- Are mostly IO-bound → prefer concurrency models